91.
In cloud computing, services play a key role: they are well-defined, autonomous components. Demand for fuzzy inference as a service is increasing in the domain of complex and critical systems. In such systems, the cost of detecting and fixing software defects grows as development proceeds, so formal methods, which provide a clear, concise, mathematical interpretation of the system, are crucial for designing these fuzzy systems. To this end, we introduce the Fuzzy Inference Cloud Service (FICS), which provides fuzzy inference to consumers, and propose a novel discipline for its formal modeling. We also introduce four novel formal verification tests that allow strict analysis of certain behavioral disciplines of the FICS: (1) internal consistency, which analyzes the service strictly and in detail; (2) deadlock freeness; (3) divergence freeness; and (4) goal reachability. The four tests are discussed, and the FICS is verified to pass all of them.
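To make the service side of the abstract concrete, here is a minimal sketch of a fuzzy-inference core of the kind such a service would expose. It is an illustration only, not the paper's FICS model: the variable names, membership functions, and rules (a temperature-to-fan-speed controller) are all assumed, with Mamdani-style min-inference and centroid defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp):
    """Map a temperature reading to a fan-speed command (hypothetical rules)."""
    # Rule activations; with a single antecedent, Mamdani min-inference is trivial.
    cold = tri(temp, 0, 10, 25)    # IF temp is cold THEN fan is low
    hot  = tri(temp, 15, 30, 40)   # IF temp is hot  THEN fan is high
    # Centroid defuzzification over a sampled output universe [0, 100].
    num = den = 0.0
    for speed in range(0, 101):
        low  = min(cold, tri(speed, -1, 0, 50))     # clipped "low" output set
        high = min(hot,  tri(speed, 50, 100, 101))  # clipped "high" output set
        mu = max(low, high)                         # aggregate by max
        num += speed * mu
        den += mu
    return num / den if den else 0.0
```

A consumer of the service would only ever see `infer`: a crisp input in, a crisp command out, with the rule base hidden behind the interface.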
92.
A web operating system is an operating system that users can access from any hardware at any location. A peer-to-peer (P2P) grid uses P2P communication for resource management and communication between nodes, manages resources locally in each cluster, and thus provides a suitable architecture for a web operating system. Applying semantic technology to web operating systems is an emerging field that improves the management and discovery of resources and services. In this paper, we propose PGSW-OS (P2P grid semantic Web OS), a model based on a P2P grid architecture and semantic technology that improves resource management in a web operating system through semantically aided resource discovery. Our approach integrates distributed hash tables (DHTs) and semantic overlay networks, advertising resources in the DHT under their ontology annotations to enable semantic resource matchmaking. The model includes ontologies and virtual organizations, and the technique decreases the computational complexity of searching in a web operating system environment. In a simulation study using the GridSim simulator, our model provides enhanced resource utilization, better search expressiveness, scalability, and precision.
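The core mechanism the abstract describes, advertising resources in a DHT under their semantic annotations and expanding queries through an ontology, can be sketched in a few lines. This is a toy illustration with an in-process dictionary standing in for the DHT and a hand-made two-entry ontology; the concept names and resource ids are assumptions, not taken from PGSW-OS.

```python
import hashlib

dht = {}  # key -> set of resource ids; stands in for a distributed hash table

subclasses = {  # tiny hand-made ontology fragment (assumed for illustration)
    "compute": ["cpu", "gpu"],
    "storage": ["disk"],
}

def dht_key(concept):
    """Hash an annotation concept to a DHT key."""
    return hashlib.sha1(concept.encode()).hexdigest()[:8]

def advertise(resource_id, annotations):
    """Publish the resource under the key of each of its annotations."""
    for concept in annotations:
        dht.setdefault(dht_key(concept), set()).add(resource_id)

def discover(concept):
    """Semantic matchmaking: look up the concept and all its subclasses."""
    terms = [concept] + subclasses.get(concept, [])
    found = set()
    for t in terms:
        found |= dht.get(dht_key(t), set())
    return found

advertise("node-17", ["gpu"])
advertise("node-42", ["disk"])
```

A query for the general concept `compute` then finds the GPU node even though it was never advertised under `compute` itself, which is the expressiveness gain the paper attributes to the semantic layer.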
93.
The lumped-parameter/complex-plane analysis technique revealed several contributions to the terminal admittance of the ac response of ZnO–Bi2O3-based varistor grain boundaries. The terminal capacitance has been elucidated via multiple trapping phenomena, barrier-layer polarization, and a resonance effect in the frequency range 10⁻² ≤ f ≤ 10⁹ Hz. Characterization of the trapping relaxation behavior near ∼10⁵ Hz (∼10⁻⁶ s) provided a better understanding of a previously reported loss peak. A possible nonuniformity in this trapping activity, associated with its conductance term and observed via the depression angle of a semicircular relaxation in the complex-capacitance (C*) plane, has been postulated.
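The depressed semicircle the abstract reads the nonuniformity from is the standard Cole-Cole form of a distributed relaxation. The sketch below evaluates that form; the symbols and all parameter values (C0, Cinf, tau, alpha) are generic Cole-Cole conventions assumed for illustration, not values from the paper, and the depression angle of the semicircle's center is alpha times 90 degrees.

```python
import math

def complex_capacitance(f, C0=1e-9, Cinf=1e-10, tau=1e-6, alpha=0.1):
    """Cole-Cole form: C*(f) = Cinf + (C0 - Cinf) / (1 + (j*2*pi*f*tau)**(1 - alpha)).

    alpha = 0 gives an ideal (undepressed) Debye semicircle in the C* plane;
    alpha > 0 tilts the semicircle's center below the real axis.
    """
    jwt = 1j * 2 * math.pi * f * tau
    return Cinf + (C0 - Cinf) / (1 + jwt ** (1 - alpha))

depression_angle_deg = 0.1 * 90  # alpha * 90 degrees for the values above
```

Sweeping `f` and plotting `-C.imag` against `C.real` traces the depressed semicircle; fitting alpha to measured data is what quantifies the distribution of trapping relaxation times.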
94.
With the wide availability of digital video content on the internet, users need more assistance in accessing digital videos. Much research on video summarization and semantic video analysis has been done to help satisfy these needs; such work develops condensed versions of a full-length video stream by identifying the most important and pertinent content within the stream. Most existing work in these areas focuses on event mining. Event mining from video streams improves the accessibility and reusability of large media collections, and it has been an active area of research with notable recent progress. It spans a wide range of multimedia domains such as surveillance, meetings, broadcast news, sports, documentaries, and films, as well as personal and online media collections. Given the variety and abundance of event mining techniques, in this paper we propose an analytical framework for classifying event mining techniques and evaluating them against important functional measures. This framework can support empirical and technical comparison of event mining methods and the development of more efficient structures in the future.
95.
Robot manufacturers will be required to demonstrate objectively that all reasonably foreseeable hazards have been identified in any robotic product design that is to be marketed commercially. This is problematic for autonomous mobile robots because conventional methods, which were developed for automatic systems, do not help safety analysts identify non-mission interactions with environmental features that are not directly associated with the robot's design mission, yet may comprise the majority of the required tasks of autonomous robots. In this paper we develop a new variant of preliminary hazard analysis that is explicitly aimed at identifying non-mission interactions by means of new sets of guidewords not normally found in existing variants. We develop the required features of the method and describe its application to several small trials conducted at Bristol Robotics Laboratory in 2011–2012.
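The mechanics of a guideword-driven preliminary hazard analysis can be sketched as a cross-product of system actions and guidewords, each pairing becoming a prompt for the analyst. The guidewords and actions below are invented examples for illustration; the paper's actual non-mission guideword sets are not reproduced here.

```python
from itertools import product

actions = ["move to dock", "open gripper"]  # hypothetical robot tasks
conventional_guidewords = ["no/not", "more", "less", "reverse"]
non_mission_guidewords = ["unexpected human", "pet/animal", "loose cable"]  # assumed examples

def hazard_prompts():
    """Challenge every action with every guideword, conventional or not."""
    prompts = []
    for action, word in product(actions,
                                conventional_guidewords + non_mission_guidewords):
        prompts.append(f"What if '{word}' applies while the robot tries to {action}?")
    return prompts
```

The value of the paper's extension shows up in the second guideword list: without it, the cross-product never asks about environmental features outside the design mission, which is exactly the gap identified in the abstract.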
96.
Cloud computing distributes computation across multiple machines on the service side. To process the growing volume of multimedia data, numerous large-scale multimedia storage and computing techniques have been developed for the cloud, among which Hadoop plays a key role. Hadoop, a computing cluster built from low-priced hardware, can carry out parallel computation over petabytes of multimedia data and features high reliability, high efficiency, and high scalability. Large-scale multimedia data computing involves not only the core techniques, Hadoop and MapReduce, but also data collection techniques such as the File Transfer Protocol and Flume, as well as distributed system configuration allocation, automatic installation, and the building and management of monitoring platforms; only by integrating all these techniques can a reliable large-scale multimedia data platform be offered. In this paper, we propose a multimedia social network dataset on the Hadoop platform and implement a prototype version; detailed specifications and design issues are discussed as well. An important finding is that multimedia social network analysis takes less time on a cloud Hadoop platform than on a single computer. The advantages of cloud computing over traditional data processing practices are demonstrated, applicable framework designs and tools for large-scale data processing are proposed, and the experimental multimedia data, including data sizes and processing times, are reported.
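The MapReduce pattern that Hadoop runs at cluster scale can be shown in-process in a few lines. This is an illustration only (a real job would go through Hadoop Streaming or the Java API); the records are hypothetical social-network interactions, and the job counts interactions per user in the word-count style typical of such analyses.

```python
from collections import defaultdict

records = [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]  # made-up edges

def map_phase(record):
    """Emit one (key, 1) pair per participant in the interaction."""
    src, dst = record
    yield (src, 1)
    yield (dst, 1)

def reduce_phase(mapped):
    """Shuffle and reduce, collapsed into one grouping pass over the pairs."""
    counts = defaultdict(int)
    for key, value in mapped:
        counts[key] += value
    return dict(counts)

interaction_counts = reduce_phase(kv for r in records for kv in map_phase(r))
```

On a cluster, the mapper runs on each split of the input in parallel and the framework handles the shuffle between phases, which is where the speedup over a single computer reported in the abstract comes from.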
97.
A new variant of Differential Evolution (DE), called ADE-Grid, is presented in this paper; it adapts the mutation strategy, crossover rate (CR), and scale factor (F) during the run. In ADE-Grid, learning automata (LA), which are powerful decision-making machines, are used to adaptively determine the proper values of CR and F and the suitable strategy for constructing the mutant vector for each individual. The proposed automata-based DE is able to maintain diversity among the individuals and encourage them to move toward several promising areas of the search space as well as the best position found. Numerical experiments are conducted on a set of twenty-four well-known benchmark functions and one real-world engineering problem. A performance comparison between ADE-Grid and other state-of-the-art DE variants indicates that ADE-Grid is a viable approach to optimization, improving DE in terms of both convergence speed and quality of the final solution.
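To ground the vocabulary of the abstract (mutant vector, F, CR), here is a bare-bones DE/rand/1/bin on the sphere function. F and CR are fixed here; ADE-Grid's contribution is precisely to adapt them (and the mutation strategy) per individual via learning automata, which this sketch does not attempt.

```python
import random

def sphere(x):
    """Classic benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def de(dim=5, pop_size=20, F=0.5, CR=0.9, gens=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1: mutant = a + F * (b - c), with a, b, c distinct from i.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            if sphere(trial) <= sphere(pop[i]):
                pop[i] = trial  # greedy one-to-one selection
    return min(pop, key=sphere)

best = de()
```

Replacing the fixed `F`, `CR`, and strategy choice with a per-individual decision learned from reward feedback is the step that turns this baseline into an adaptive variant of the ADE-Grid kind.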
98.
In the present study, a group method of data handling (GMDH) network was used to predict the scour depth below pipelines. The GMDH network was developed using back propagation. The input parameters considered to affect scour depth were sediment size, pipeline geometry, and approaching-flow characteristics. Training and testing of the GMDH networks were carried out using nondimensional data sets collected from the literature, covering the two main situations of pipeline scour experiments: clear-water and live-bed conditions. The test performance was compared with support vector machines (SVM) and existing empirical equations; the GMDH network with back propagation produced lower scour depth prediction error than either. The effects of the various input parameters on scour depth were also investigated.
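The building block of a GMDH network is a neuron that fits the quadratic Ivakhnenko polynomial of one pair of inputs by least squares. The sketch below implements that single building block; it is illustrative only, with made-up training data, and omits the layer stacking, held-out "external criterion" ranking, and back-propagation fine-tuning the paper's network uses.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def basis(u, v):
    """Terms of the Ivakhnenko polynomial: 1, u, v, u^2, v^2, u*v."""
    return [1.0, u, v, u * u, v * v, u * v]

def fit_neuron(pairs, targets):
    """Least squares via the normal equations (X^T X) a = X^T y."""
    X = [basis(u, v) for u, v in pairs]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(6)] for i in range(6)]
    Xty = [sum(X[r][i] * targets[r] for r in range(len(X))) for i in range(6)]
    return solve(XtX, Xty)

# Hypothetical data: y = u^2 + 2v, exactly representable in the basis.
train = [(u * 0.5, v * 0.5) for u in range(6) for v in range(6)]
ys = [u * u + 2 * v for u, v in train]
coef = fit_neuron(train, ys)
predict = lambda u, v: sum(c * t for c, t in zip(coef, basis(u, v)))
```

A full GMDH layer would fit one such neuron for every pair of inputs and keep only those with the lowest error on held-out data, repeating layer by layer until the external criterion stops improving.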
99.
In this paper, a novel algorithm for image encryption based on a hash function is proposed. A 512-bit external secret key is used as the input to the Salsa20 hash function, which is first modified to generate a key stream better suited for image encryption. The final encryption key stream is then produced by correlating this key stream with the plaintext, yielding both key sensitivity and plaintext sensitivity. The scheme achieves high sensitivity, high complexity, and high security through only two rounds of diffusion. In the first round, the original image is partitioned horizontally into an array of 1,024 sections of size 8 × 8; in the second, the same operation is applied vertically to the transpose of the resulting array. The main idea of the algorithm is to use averages of the image data for encryption: each section is encrypted using the average of the other sections, so the algorithm uses different averages for different input images (even with the same hash-based sequence). This significantly increases the resistance of the cryptosystem against known/chosen-plaintext and differential attacks. It is demonstrated that the 2D correlation coefficient (CC), peak signal-to-noise ratio (PSNR), encryption quality (EQ), entropy, mean absolute error (MAE), and decryption quality satisfy the security and performance requirements (CC < 0.002177, PSNR < 8.4642, EQ > 204.8, entropy > 7.9974, and MAE > 79.35). Number of pixel change rate (NPCR) analysis revealed that when only one pixel of the plain image is modified, almost all cipher pixels change (NPCR > 99.6125 %), and the unified average changing intensity is high (UACI > 33.458 %). Moreover, the proposed algorithm is very sensitive to small changes (e.g., modification of a single bit) in the external secret key (NPCR > 99.65 %, UACI > 33.55 %). The algorithm is shown to yield better security performance than other algorithms.
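The NPCR and UACI figures quoted above follow the standard definitions, which are simple to compute for two equal-size grayscale cipher images. The sketch below uses flat lists of 0-255 pixel values; the test images are made up, not from the paper.

```python
def npcr(c1, c2):
    """Number of pixel change rate: % of positions whose pixel values differ."""
    diff = sum(1 for a, b in zip(c1, c2) if a != b)
    return 100.0 * diff / len(c1)

def uaci(c1, c2):
    """Unified average changing intensity, normalized by the 255 gray range."""
    return 100.0 * sum(abs(a - b) for a, b in zip(c1, c2)) / (255 * len(c1))
```

For a well-diffusing cipher, changing one plaintext pixel should drive NPCR toward the ~99.6 % expected of two independent random images, which is what the abstract's >99.6125 % figure is benchmarked against.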
100.
We have developed novel catalysts for the gasification of biomass with much higher energy efficiency than conventional methods (no catalyst, dolomite, or a commercial steam reforming Ni catalyst). Gasification of cellulose over the novel Rh/CeO2/SiO2 catalysts showed that the process consists of the reforming of tar and the combustion of solid carbon. We also tested Rh/CeO2/SiO2 in gasification with air, pyrogasification, and steam reforming of cedar wood; it gave a higher syngas yield than the conventional steam reforming Ni catalyst. Furthermore, we compared the performance of single- and dual-bed reactors. The single-bed reactor was effective for gasifying cedar but unsuitable for rice straw, for which rapid deactivation was observed. Gasification of rice straw, jute stick, and bagasse using the fluidized dual-bed reactor with Rh/CeO2/SiO2 was also investigated; in particular, catalyst stability in the gasification of rice straw was clearly enhanced by the fluidized dual-bed reactor.
Copyright©北京勤云科技发展有限公司  京ICP备09084417号